
PARROT: An Open Multilingual Radiology Reports Dataset

Guellec, Bastien Le, Adambounou, Kokou, Adams, Lisa C, Agripnidis, Thibault, Ahn, Sung Soo, Chalal, Radhia Ait, Antonoli, Tugba Akinci D, Amouyel, Philippe, Andersson, Henrik, Bentegeac, Raphael, Benzoni, Claudio, Blandino, Antonino Andrea, Busch, Felix, Can, Elif, Cau, Riccardo, Cavallo, Armando Ugo, Chavihot, Christelle, Chiquete, Erwin, Cuocolo, Renato, Divjak, Eugen, Ivanac, Gordana, Macek, Barbara Dziadkowiec, Elogne, Armel, Fanni, Salvatore Claudio, Ferrarotti, Carlos, Fossataro, Claudia, Fossataro, Federica, Fulek, Katarzyna, Fulek, Michal, Gac, Pawel, Gachowska, Martyna, Juarez, Ignacio Garcia, Gatti, Marco, Gorelik, Natalia, Goulianou, Alexia Maria, Hamroun, Aghiles, Herinirina, Nicolas, Kraik, Krzysztof, Krupka, Dominik, Holay, Quentin, Kitamura, Felipe, Klontzas, Michail E, Kompanowska, Anna, Kompanowski, Rafal, Lefevre, Alexandre, Lemke, Tristan, Lindholz, Maximilian, Muller, Lukas, Macek, Piotr, Makowski, Marcus, Mannacio, Luigi, Meddeb, Aymen, Natale, Antonio, Edzang, Beatrice Nguema, Ojeda, Adriana, Park, Yae Won, Piccione, Federica, Ponsiglione, Andrea, Poreba, Malgorzata, Poreba, Rafal, Prucker, Philipp, Pruvo, Jean Pierre, Pugliesi, Rosa Alba, Rabemanorintsoa, Feno Hasina, Rafailidis, Vasileios, Resler, Katarzyna, Rotkegel, Jan, Saba, Luca, Siebert, Ezann, Stanzione, Arnaldo, Tekin, Ali Fuat, Yanchapaxi, Liz Toapanta, Triantafyllou, Matthaios, Tsaoulia, Ekaterini, Vassalou, Evangelia, Vernuccio, Federica, Wasselius, Johan, Wang, Weilang, Urban, Szymon, Wlodarczak, Adrian, Wlodarczak, Szymon, Wysocki, Andrzej, Xu, Lina, Zatonski, Tomasz, Zhang, Shuhang, Ziegelmayer, Sebastian, Kuchcinski, Gregory, Bressem, Keno K

arXiv.org Artificial Intelligence

Rationale and Objectives: To develop and validate PARROT (Polyglottal Annotated Radiology Reports for Open Testing), a large, multicentric, open-access dataset of fictional radiology reports spanning multiple languages for testing natural language processing applications in radiology.

Materials and Methods: From May to September 2024, radiologists were invited to contribute fictional radiology reports following their standard reporting practices. Contributors provided at least 20 reports with associated metadata including anatomical region, imaging modality, clinical context, and, for non-English reports, English translations. All reports were assigned ICD-10 codes. A human vs. AI report differentiation study was conducted with 154 participants (radiologists, healthcare professionals, and non-healthcare professionals) assessing whether reports were human-authored or AI-generated.

Results: The dataset comprises 2,658 radiology reports from 76 authors across 21 countries and 13 languages. Reports cover multiple imaging modalities (CT: 36.1%, MRI: 22.8%, radiography: 19.0%, ultrasound: 16.8%) and anatomical regions, with chest (19.9%), abdomen (18.6%), head (17.3%), and pelvis (14.1%) being most prevalent. In the differentiation study, participants achieved 53.9% accuracy (95% CI: 50.7%-57.1%) in distinguishing between human and AI-generated reports, with radiologists performing significantly better (56.9%, 95% CI: 53.3%-60.6%, p<0.05) than other groups.

Conclusion: PARROT represents the largest open multilingual radiology report dataset, enabling development and validation of natural language processing applications across linguistic, geographic, and clinical boundaries without privacy constraints.
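The abstract describes each report as carrying structured metadata (language, modality, anatomical region, clinical context, ICD-10 codes, and an English translation for non-English reports) and reports accuracies with 95% confidence intervals. A minimal sketch of what a record might look like, plus the standard normal-approximation CI for a proportion, is shown below; the field names and the `wald_ci` helper are illustrative assumptions, not the dataset's actual schema or the study's exact statistical method.

```python
from dataclasses import dataclass, field
from math import sqrt
from typing import Optional

@dataclass
class ParrotReport:
    # Hypothetical schema mirroring the metadata listed in the abstract;
    # the real dataset's field names may differ.
    report_id: str
    language: str
    modality: str            # e.g. "CT", "MRI", "radiography", "ultrasound"
    anatomical_region: str   # e.g. "chest", "abdomen", "head", "pelvis"
    clinical_context: str
    text: str
    icd10_codes: list[str] = field(default_factory=list)
    english_translation: Optional[str] = None  # provided for non-English reports

def wald_ci(successes: int, trials: int, z: float = 1.96) -> tuple[float, float]:
    """Normal-approximation (Wald) 95% CI for a proportion, the simplest
    way to reproduce interval estimates like those quoted in the abstract."""
    p = successes / trials
    half = z * sqrt(p * (1 - p) / trials)
    return p - half, p + half
```

For example, `wald_ci(50, 100)` returns an interval of roughly (0.402, 0.598) around an observed accuracy of 50%; with the study's larger number of responses, the interval tightens accordingly.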


No longer science fiction: Artificial intelligence makes its way to interventional radiology

#artificialintelligence

artificial intelligence, noun: The capacity of computers or other machines to exhibit or simulate intelligent behavior; the field of study …


Radiology: Artificial Intelligence

#artificialintelligence

Nooshin Abbasi is a post-doctoral research fellow at Brigham and Women's Hospital, Harvard Medical School, and a former research fellow at the Montreal Neurological Institute, McGill University. Her research interests include brain imaging, evidence-based imaging, and bioinformatics, with a focus on applying machine learning tools to large clinical and imaging datasets.

Michael Dohopolski is a PGY-5 radiation oncology resident. He has worked with Dr. Wang and Dr. Jiang at UT Southwestern on machine learning-based clinical decision support tools, with an emphasis on single-prediction uncertainty estimation.

She is in the Department of Neurosurgery, University of Pennsylvania, and the Division of Neurosurgery, Children's Hospital of Philadelphia.


Artificial intelligence virtual consultant helps deliver better patient care

#artificialintelligence

WASHINGTON, DC (March 8, 2017)--Interventional radiologists at the University of California, Los Angeles (UCLA) are using technology found in self-driving cars to power a machine-learning application that helps guide patients' interventional radiology care, according to research presented today at the Society of Interventional Radiology's 2017 Annual Scientific Meeting. The researchers used cutting-edge artificial intelligence to create a "chatbot" interventional radiologist that can automatically communicate with referring clinicians and quickly provide evidence-based answers to frequently asked questions. This allows the referring physician to give the patient real-time information about the next phase of treatment, or basic information about an interventional radiology procedure. "We theorized that artificial intelligence could be used in a low-cost, automated way in interventional radiology as a way to improve patient care," said Edward W. Lee, M.D., Ph.D., assistant professor of radiology at UCLA's David Geffen School of Medicine and one of the authors of the study. "Because artificial intelligence has already begun transforming many industries, it has great potential to also transform health care."